S1: Welcome in, San Diego. It's Jade Hindman on today's show. The privacy issues being raised around law enforcement's use of AI. This is KPBS Midday Edition. Connecting our communities through conversation. Artificial intelligence is everywhere. It's used at work, in schools and even in our government agencies. The Chula Vista Police Department is joining cities across the country in using AI to write police reports. And several San Diego County police departments are using AI-powered drones to support their work. All of this is bringing up questions about privacy and surveillance. So we wanted to have a conversation about the ethics of AI in public agencies. Joining me now is David Danks. He's a professor of data science, philosophy and policy at UC San Diego. Professor, welcome back to Midday Edition.
S3: Thanks so much for having me.
S1: Glad to have you here.
S3: Um, you know, the single biggest issue is that public-facing services are intended exactly to serve the public, and AI removes a human from that chain. Suddenly, there isn't necessarily going to be a human who's able to adjust to the individual circumstances of the moment. So what we really worry about when AI moves into the public sphere is that we lose the ability to adjust to unusual circumstances, but also that some of our past historical practices, things that might have led to bias or to increased surveillance and intrusion on our privacy, are going to be supercharged by AI, precisely because you don't have to hire any more people to do it. Once you've got one AI, you can use it everywhere, as opposed to humans, who take time to train.
S1: Hmm. And tell me more about that.
S3: AI is in many ways, I think, best thought of as a reflection of us humans. We train AI systems using data collected from people, whether it's large language models being trained on all of the texts that we've written, or a police surveillance AI that's trained on past policing practices. So what the AI has learned to do is, in many ways, to mirror back to us whatever our own past behavior has been. We can tweak things a little bit. We can improve the AI so that it isn't quite the way that we have been historically. But if our data are coming from, say, the days of redlining, if we're thinking about something like mortgage lending, when there were systematic biases baked into mortgage lending practices, AIs are going to learn to replicate those biases. And so the same thing happens with public services, which historically, in many municipalities, have not necessarily been distributed equitably or evenly across the population in a given jurisdiction.
S1:
S3: AI not only can do a little bit better than humans, if we're very careful about how we build and use it. We can also often look to understand why the AI gave us the recommendation that it did. Humans can't always explain why we made the decisions that we did, but AI usually can. So I think in those ways we can make things better. But I also think we need to be realistic that we're never going to get to a world in which AI is entirely error-free, or does exactly what everybody would want, in part because there may not be any one thing that everybody wants.
S1: Hmm. Well, while flawed, one argument for using AI in policing is that it could reduce police burnout by automating some of that work.
S3: The challenge, for example: you could have AI that could produce a summary from body cam footage of an interaction with a citizen. Now, the tricky part about that is that we expect that the AI is going to miss certain important things. It might miss the nuance in somebody's tone of voice. And so it's going to be critical, as these systems are rolled out, to ensure that a human, the human police officer in this case, is always checking the results and actually checking that the results are accurate, not just clicking so they can go home. And that's a people problem. That's not an AI problem. That's about how we make sure that our institutions are strong enough to hold people accountable if they do sign off on something without actually checking it, and that people actually are thinking carefully about whether the report that's provided by the AI is a good summary of what they went through. Most of the time it will be, but occasionally it won't be, and we need to make sure that we have mechanisms in place to catch those cases.
S1:
S3: This is one of these areas where AI systems, or early versions of AI systems, have been rolled out in various companies dating back to the 1980s. So at this point, we have 30-plus years of experience of good practices to ensure that people are actually reading the reports. You can ask people to give just a brief comment about whether they think it's good or bad. You can have randomly highlighted sections that they have to, you know, make sure to confirm, or that they have to make some kind of edit to. So you basically can force people to engage with it. But engaging with something that's already written is, of course, much faster than having to write it yourself from scratch.
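To make the "forced engagement" idea concrete, here is a minimal sketch of the kind of review gate he describes: randomly flagged sections of an AI-drafted report that the reviewing officer must confirm or edit before sign-off is allowed. The function names, section labels, and workflow are hypothetical illustrations, not any department's or vendor's actual system.

```python
import random

def sections_requiring_review(report_sections, fraction=0.3, seed=None):
    """Randomly flag a subset of an AI-drafted report's sections that the
    human reviewer must explicitly confirm or edit before sign-off."""
    rng = random.Random(seed)
    k = max(1, round(len(report_sections) * fraction))
    return set(rng.sample(range(len(report_sections)), k))

def can_sign_off(required, reviewed):
    """Block sign-off until every flagged section has been confirmed or
    edited by the reviewing officer."""
    return required.issubset(reviewed)

# Hypothetical usage: a three-section draft, half of it flagged for review.
draft = ["arrival at scene", "witness statement", "property recovered"]
required = sections_requiring_review(draft, fraction=0.5, seed=42)
print(can_sign_off(required, reviewed=set()))       # False: nothing reviewed yet
print(can_sign_off(required, reviewed=required))    # True: all flagged sections handled
```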
S1: Hmm. You know, there's also the issue of privacy here. AI has the ability to sort of quickly collect and synthesize large amounts of data, and that data is easily shared. For example, the El Cajon Police Department has received statewide scrutiny for sharing automated license plate reader data with outside agencies.
S3: But one of the nice things about AI is that it doesn't collect data unless you tell it to. It doesn't process data unless you tell it to. So on the one hand, I do worry a lot about the privacy implications of increased use of AI. We can have very widespread data collection across many people. And as you were saying, data can be integrated from many different sources at the same time. On the other hand, AI gives us a very clean way of enforcing the legal privacy restrictions that are often already in place. In many cases, when we look at privacy violations over the years, what we see is that there are people who have simply been unaware of the law or have misinterpreted the law. Whereas with AI, we can actually use the technology itself to enforce some of the privacy restrictions, whether it's by automatically blurring faces in video feeds, or by ensuring that data are deleted after a certain time, or other mechanisms, often under the heading of what's called differential privacy, which is a technical notion that can actually improve our privacy.
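Differential privacy is the one named technique here, so a brief sketch may help: the classic Laplace mechanism releases an aggregate statistic, such as a count, with calibrated random noise so that the published number does not reveal whether any single person's record was in the data. The scenario and numbers below are illustrative assumptions, not a description of any agency's system.

```python
import numpy as np

def dp_count(records, epsilon):
    """Release len(records) with epsilon-differential privacy via the
    Laplace mechanism. A count has sensitivity 1: adding or removing one
    person's record changes the true count by at most 1, so noise with
    scale 1/epsilon masks any individual's contribution."""
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return len(records) + noise

# Illustrative example: publish roughly how many license-plate reads matched
# a watchlist today, without the released figure pinning down whether any
# one specific vehicle appeared in the data.
matches = ["read_001", "read_002", "read_003"]   # hypothetical records
print(dp_count(matches, epsilon=0.5))            # e.g. 3 plus or minus a few
```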
S1: AI, you know, is also being used in other professions like healthcare. What are some of the questions we have to contend with there?
S3: Well, of course, we have many of the same questions about privacy and surveillance. And similarly, we have questions about accuracy and about who is actually taking ownership and accountability if something goes wrong. I think in the health care case it gets particularly challenging, because as doctors increasingly are using or relying on AI systems, we should expect that, unless we do it in a very careful way, to start to further erode the doctor-patient relationship that's already under so much strain today. If my doctor simply is reporting back whatever the AI says, then why did I need to go to the doctor in the first place? And we know that a very strong predictor of positive health outcomes is how much you trust your doctor, how much you are willing to disclose to them. So personally, I worry a lot that as we see the rise of AI in healthcare, we're going to see an erosion of the kind of trust that is at the core of so many doctor-patient relationships.
S1: Hmm.
S3: So it's about making sure that the AI that we produce and deploy, especially any AI that is supposed to support or provide public services, really is tied to the needs of the people who are supposedly being served by it. So that puts a burden on those of us who build AI systems to actually go out and ask patients, or individuals struggling with housing, or individuals engaging with the police, or the police officers themselves: what are your challenges, what are your values, what are your needs, and how can we provide AI that supports those, rather than just guessing about what people might need.
S1: Hmm. So, you know, when it comes to regulation, there's a lot to be desired there.
S3: Part of the problem we have is that, um, right now, AI systems are increasingly being built in a very distributed way. So you'll have four or five or six companies that all are playing a role in ensuring that the AI system supposedly does what we want it to do. When that goes wrong, though, that means it can be very hard to assign responsibility, either in a legal or an ethical sense. So I think right now one of the big questions is for all of us to be able to understand how these systems are really built in the world, and then to have policies in place, whether it's the companies engaging in contractual negotiations with one another or some sort of broader regulation, ensuring that there is actually somebody, or some company, that can be held responsible, rather than the responsibility just being diffused in a way where no one is.
S1: Hmm. Okay.
S3: Part of what it is to be effective is to maintain safety and security, but also to maintain privacy and confidence and trust in the systems. I think sometimes there's this view that AI is a thing that gets built, and then after the fact we can give it these good features, like preserving privacy, but that first you have to build an effective AI. I think effective just means you're supporting people in what it is that they need. You're supporting people in their everyday lives, and that just is to be thinking about all of these things from the very beginning. So I think what we really need is to rethink the ways that we talk about AI, and also, for those of us who build AI systems, the ways that we build them, so that we center people at the very beginning rather than bringing people in only at the end.
S1: You know, you'd think it'd be obvious, but great advice there. I've been speaking with David Danks. He's a professor of data science, philosophy and policy at UC San Diego. Professor, thank you so much.
S3: Thanks for having me.
S1: That's our show for today. I'm your host, Jade Hindman. Thanks for tuning in to Midday Edition. Be sure to have a great day on purpose, everyone.